Creators of Intelligence by Dr. Alex Antic

Author: Dr. Alex Antic
Language: English
Format: EPUB
Publisher: Packt Publishing
Published: 2022-04-28


Measuring success

We all too often hear of data science/AI projects having high failure rates of approximately 85 percent. Do you think this reflects reality, and if so, why is that the case? After all, significant money is spent on people and technology, and we’re drowning in data, so what’s the problem?

NVO: The reality is that most AI projects fail. There are several reasons, some related to AI, some not. Constructing AI/optimization solutions is hard, and more often than not it takes months, if not years, to develop, test, scale, and put a solution into production. And I’m not even talking about the ongoing maintenance and upgrades it needs. So, unless you can do this for a problem where the ROI is demonstrably large, the chances that a company can complete and support an AI project are slim. I’ve seen sound AI projects that were torpedoed simply because a director was replaced and his successor wanted to wipe out the past.

With AI projects, you’re in it for the long term.

There are other reasons for failure.

Most teams focus too much on the solution and not the problem.

For instance, they want to build state-of-the-art solutions that are not applicable to the company they are working for. Or they like their shiny new toys (computers, libraries, tools, and so on) too much and want to use all the power of their machines/solvers just to justify a budget that was planned in advance, with no relation to what an optimal solution actually needs. Or they don’t talk with the people who will use their tools. These are known problems, and they are not specific to AI, although the hype around AI and its complexity make companies particularly prone to such errors.

But there are at least two main errors that are specific to AI. The first is the belief that AI can magically solve everything. Companies keep data lakes with the conviction that they will be able to mine this gold later on. This is rarely the case because, most of the time, we don’t know what to do with all this scattered information. Collecting relevant information is an integral part of the problem to be solved. There is also the huge problem that you often test a model on toy data that does not correspond to reality, or, if it does, your situation will change, and that change shows up as new kinds of data on which the models were never trained and to which they no longer correspond. Furthermore, if you don’t talk with the experts in the domain, AI experts will probably find spurious correlations that they interpret as causation. You need hard work and domain knowledge to interpret results.
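To make the point about spurious correlations concrete, here is a minimal Python sketch (an illustration, not from the interview; all numbers are invented) showing that, given enough candidate features, some will correlate strongly with the target purely by chance:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 candidate features, each pure noise, measured on 50 samples.
X = rng.normal(size=(50, 1000))
# The target is also pure noise: there is no real signal anywhere.
y = rng.normal(size=50)

# Correlation of each feature with the target.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(1000)])

best = int(np.argmax(np.abs(corrs)))
print(f"feature {best} correlates with y at r = {corrs[best]:.2f}")
```

With 50 samples and 1,000 noise features, the strongest correlation typically lands above 0.4 by chance alone; without domain knowledge, it is easy to mistake such an artifact for a causal signal.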

The second error is even worse. For most projects, the tools and approaches are simply wrong. In particular, what most companies need are prescriptive approaches, not predictive approaches. It is nice to be able to predict, even essential for many companies, but you want more than that: you want to be able to act on those predictions.
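To illustrate the difference between predicting and prescribing, here is a minimal Python sketch (an illustration of the general idea, not the interviewee's method; the products, profits, and capacity are hypothetical) in which a demand forecast is only the input to the real decision, a production plan chosen by a small linear program:

```python
from scipy.optimize import linprog

# Predictive step: assume some forecasting model produced these numbers.
predicted_demand = [120, 80]   # forecast units of products A and B
unit_profit = [5.0, 9.0]       # profit per unit sold
hours_per_unit = [1.0, 2.0]    # machine hours consumed per unit
capacity = 200                 # shared machine hours available

# Prescriptive step: decide how much of each product to actually make.
result = linprog(
    c=[-p for p in unit_profit],                # linprog minimizes, so negate profit
    A_ub=[hours_per_unit], b_ub=[capacity],     # 1.0*xA + 2.0*xB <= 200
    bounds=[(0, d) for d in predicted_demand],  # no point making more than forecast demand
)
print("production plan:", result.x, "expected profit:", -result.fun)
# -> roughly [120, 40]: make all of A (better profit per hour), fill the rest with B.
```

The forecast alone tells you what demand might be; the optimization turns it into an action, which is what the prescriptive approach adds.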

You need robust solutions that


